
    Intergenerational transmission of language capital and economic outcomes

    This paper investigates the intergenerational transmission of language capital in immigrant communities, and the effect of language deficiencies on the economic performance of second-generation immigrants. Our analysis is based on a long panel that oversamples immigrants and allows their children to be followed even after they have left the parental home. Our results show a significant and sizeable association between parental language fluency and that of their children, conditional on a rich set of parental and family background characteristics. We also find that language deficiencies among the children of immigrants are associated with poorer labour market outcomes for females, but not for males. There is a strong relationship between parental language fluency and labour market outcomes for females, which works through the child's language proficiency.

    National and Regional Estimates of the Prevalence of Opiate and/or Crack Cocaine use 2008-09: A summary of key findings

    This report summarises the results of a follow-up study to a three-year project to estimate the prevalence of 'problem drug use' (defined as use of opiates and/or crack cocaine) nationally (England only), regionally and locally. The follow-up was carried out two years after the final sweep of the original project, and can therefore be considered 'sweep 5'. An overview of the national and regional estimates is presented in this report, as are comparisons with the estimates produced by the third (2006-07) sweep of the study. Estimates for 2007-08 are not available as a study was not commissioned for that year.

    Information about the number of people who use illicit drugs such as heroin, other opiates or crack cocaine is key to formulating effective policies for tackling drug-related harm, as these drugs are associated with the highest levels of harm. It also helps inform service provision at the local level and provides a context in which to understand the population impact of interventions to reduce drug-related harm.

    Direct enumeration of those engaged in a largely covert activity such as the use of class A drugs is difficult, and standard household survey techniques tend to underestimate the extent of such activity. Indirect techniques making use of various data sources offer a more reliable way of calculating prevalence estimates for the use of opiates and/or crack cocaine. The estimates presented in this report are derived using two indirect measurement techniques: the capture-recapture (CRC) method and the multiple indicator method (MIM). These methods are described in detail in Hay et al., 2006 and Hay et al., 2007a. Methodological developments throughout the course of the previous three sweeps are discussed elsewhere (Hay et al., 2007b; Hay et al., 2008). The individuals covered by this study were people aged 15 to 64, resident in each DAT area, and known to be using heroin, methadone, other opiate drugs or crack cocaine.
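The capture-recapture idea behind the CRC estimates can be illustrated with its simplest two-source form, the Lincoln-Petersen estimator. The counts below are made up for illustration; the report's actual models use more data sources and stratification (Hay et al., 2006).

```python
# Toy two-source capture-recapture (Lincoln-Petersen) estimate of a
# hidden population. Counts are hypothetical, not from the report.

def lincoln_petersen(n1, n2, m):
    """Estimate total population size from two overlapping data sources.

    n1: individuals seen in source 1 (e.g. drug-treatment records)
    n2: individuals seen in source 2 (e.g. criminal-justice records)
    m:  individuals seen in both sources
    """
    if m == 0:
        raise ValueError("no overlap between sources: estimator undefined")
    return n1 * n2 / m

# Made-up counts: 800 in source 1, 600 in source 2, 120 in both.
estimate = lincoln_petersen(800, 600, 120)
print(round(estimate))  # 4000
```

The full CRC method generalises this idea to three or more overlapping data sources fitted with log-linear models, which relaxes the strong independence assumption of the two-source estimator.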

    UVMULTIFIT: A versatile tool for fitting astronomical radio interferometric data

    The analysis of astronomical interferometric data is often performed on the images obtained after deconvolution of the interferometer's point spread function (PSF). This strategy can be understood (especially for sparse arrays) as fitting models to models, since the deconvolved images are already non-unique model representations of the actual data (i.e., the visibilities). Indeed, the interferometric images may be affected by visibility gridding, weighting schemes (e.g., natural vs. uniform), and the particulars of the (non-linear) deconvolution algorithms. Fitting models to the direct interferometric observables (i.e., the visibilities) is preferable in the case of simple (analytical) sky intensity distributions. In this paper, we present UVMULTIFIT, a versatile library for fitting visibility data, implemented in a Python-based framework. Our software is currently based on the CASA package, but can easily be adapted to other analysis packages, provided they have a Python API. We have tested the software with synthetic data, as well as with real observations. In some cases (e.g., sources with sizes smaller than the diffraction limit of the interferometer), the results of the fit to the visibilities (e.g., spectra of nearby sources) are far superior to the output obtained from the mere analysis of the deconvolved images. UVMULTIFIT is a powerful improvement of existing tasks to extract the maximum amount of information from visibility data, especially in cases close to the sensitivity/resolution limits of interferometric observations. Comment: 10 pages, 4 figures. Accepted in A&A. Code available at http://nordic-alma.se/support/software-tool
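As an illustration of fitting a model directly to visibilities, the sketch below fits a point source of flux S offset by (l, m) to synthetic data by least squares. This is not the UVMULTIFIT API; the baseline coordinates, source parameters, and noise level are all invented.

```python
import numpy as np
from scipy.optimize import least_squares

# A point source of flux S offset by (l, m) radians from the phase centre
# has visibilities V(u, v) = S * exp(-2*pi*i*(u*l + v*m)).
rng = np.random.default_rng(0)
u = rng.uniform(-1e5, 1e5, 200)   # baseline coordinates in wavelengths (made up)
v = rng.uniform(-1e5, 1e5, 200)

S_true, l_true, m_true = 1.5, 2e-6, -1e-6   # hypothetical source parameters

def model(p):
    S, l, m = p
    return S * np.exp(-2j * np.pi * (u * l + v * m))

# Synthetic "observed" visibilities with Gaussian noise.
vis = model((S_true, l_true, m_true))
vis += 0.05 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))

def residuals(p):
    r = vis - model(p)
    return np.concatenate([r.real, r.imag])   # least_squares needs real residuals

# x_scale balances the very different magnitudes of flux and offsets.
fit = least_squares(residuals, x0=[1.0, 0.0, 0.0], x_scale=[1.0, 1e-6, 1e-6])
S_fit, l_fit, m_fit = fit.x
```

Because the fit works on the raw observables, it sidesteps gridding, weighting, and deconvolution choices entirely, which is the point the abstract makes.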

    A Comparative Analysis of Hyperspectral Target Detection Algorithms in the Presence of Misregistered Data

    Line scanning hyperspectral imaging systems are capable of capturing accurate spatial and spectral information about a scene. These data can be useful for detecting sub-pixel targets. Such systems, however, may be limited by certain key characteristics in their design. Systems employing multiple spectrometers, or that collect data from multiple focal planes, may suffer an inherent misregistration between sets of collected spectral bands. In order to utilize the full spectrum for target detection purposes, the sets of bands must be registered to each other as precisely as possible. Perfect registration is not possible, due to both the sensor design and variation in sensor orientation during data acquisition. This issue can degrade the performance of various target detection algorithms. An analysis of algorithms is necessary to determine which perform well when working with misregistered data. In addition, new algorithms may need to be developed which are more robust in these conditions. The work set forth in this thesis improves the registration between spectral bands in a line scanning hyperspectral sensor by using a geometric model of the sensor, along with aircraft orientation parameters, to pair sets of image pixels based on their ground locations. Synthetic scenes were created and band-to-band misregistration was induced between the VIS and NIR spectral channels to test the performance of various hyperspectral target detection algorithms when applied to misregistered hyperspectral data. The results for the case studied show that geometric algorithms perform well using only the VIS portion of the EM spectrum, and do not always benefit from the addition of NIR bands, even for small amounts of misregistration. Stochastic algorithms appear to be more robust than geometric algorithms for datasets with band-to-band misregistration. The stochastic algorithms tested often benefit from the addition of NIR bands, even for large amounts of misregistration.
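The abstract does not name the detectors tested, but a common example of a "geometric" algorithm of the kind it describes is the spectral angle mapper (SAM), which scores each pixel by the angle between its spectrum and a reference target spectrum while ignoring background statistics. The spectra below are made up for illustration.

```python
import numpy as np

def spectral_angle(pixel, target):
    """Angle (radians) between a pixel spectrum and a target spectrum.

    Smaller angles indicate a better spectral-shape match; the score is
    invariant to overall illumination scaling.
    """
    cos = np.dot(pixel, target) / (np.linalg.norm(pixel) * np.linalg.norm(target))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards against rounding

target = np.array([0.2, 0.5, 0.9, 0.4])   # made-up 4-band target spectrum
match  = 3.0 * target                     # same shape, different brightness
other  = np.array([0.9, 0.4, 0.2, 0.6])   # dissimilar spectrum

print(spectral_angle(match, target))   # ~0: scaled copy still matches
print(spectral_angle(other, target))   # larger angle: poorer match
```

A band misregistration effectively mixes neighbouring ground spectra into `pixel`, distorting its shape and hence the angle, which is one way such detectors degrade under the conditions studied here.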

    Three correlated teaching units on the topic New England.

    Thesis (Ed.M.)--Boston University

    Learning the Ropes: A case study of the onboarding process for newly elected city councilors

    Locally Elected Officials (LEOs) face a steep learning curve when assimilating to their new roles on municipal councils. Their skills and experience prior to being elected often do not align with the skills required, such as municipal budgeting, navigating intricate government processes, and handling a broad range of constituent feedback. While some training is available through organizations such as the National League of Cities and state municipal associations, the newly elected official typically receives only a brief orientation before they must vote on council business. In the private sector, onboarding is a proven process for assimilating new leaders, reducing the learning curve, and minimizing mistakes. Companies like L’Oréal lead the way with programs that ensure talent retention and help new members become effective in their roles. Onboarding is now appearing in the lexicon of nonprofit boards and in local government. Yet there is still a tremendous gap in the depth and breadth of onboarding provided to locally elected officials, and a gap in the research that documents this process. I address the gap by examining the onboarding perceptions and experiences of city councilors in Eastwood*, a mid-sized city in the northeastern United States. Through a review of the literature, I created a framework of onboarding best-practice elements to analyze the onboarding program in Eastwood. The findings reveal that a comprehensive onboarding program is not in place for the Eastwood Council. However, effective elements of onboarding were present: preparation for the role, relationship building, managing information and communication, and navigating roles, power, and process. Based on these experiences, I developed a model for onboarding at the local council level.
    I recommended the following policy changes: 1) the City of Eastwood should develop a customized onboarding program for its council; 2) the City of Eastwood should continue to invest in and improve technology that supports the council; and 3) the City of Eastwood should collaborate with similar organizations to develop a training program for locally elected officials. While the findings are uniquely relevant to the Eastwood Council, other local governments can draw on the findings and conclusions to guide their own inquiry and identify improvements for their councils. *Eastwood is a pseudonym.

    Quantitative Determination of the Adiabatic Condition Using Force-Detected Nuclear Magnetic Resonance

    The adiabatic condition governing cyclic adiabatic inversion of proton spins in a micron-sized ammonium chloride crystal was studied using room-temperature nuclear magnetic resonance force microscopy. A systematic degradation of the signal-to-noise ratio was observed as the adiabatic condition became violated. A theory of adiabatic following applicable to cyclic adiabatic inversion is reviewed and implemented to quantitatively determine an adiabaticity threshold $(\gamma H_1)^2/(\omega_{\mathrm{osc}}\Omega) = 6.0$ from our experimental results. Comment: 5 pages, 3 figures
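The reported threshold lends itself to a quick feasibility check: evaluate the adiabaticity parameter A = (gamma*H1)^2 / (omega_osc * Omega) and compare it with 6.0. The field strength and frequencies below are illustrative assumptions, not values from the experiment.

```python
import math

# Proton gyromagnetic ratio in rad/s/T (2*pi * 42.577 MHz/T).
GAMMA_PROTON = 2 * math.pi * 42.577e6

def adiabaticity(H1, f_osc, f_Omega):
    """A = (gamma*H1)^2 / (omega_osc * Omega).

    H1:      rotating-frame RF field amplitude in tesla
    f_osc:   cantilever oscillation frequency in Hz
    f_Omega: frequency-modulation amplitude in Hz
    """
    omega_osc = 2 * math.pi * f_osc
    Omega = 2 * math.pi * f_Omega
    return (GAMMA_PROTON * H1) ** 2 / (omega_osc * Omega)

# Hypothetical parameters: 3 mT RF field, 10 kHz cantilever, 200 kHz FM amplitude.
A = adiabaticity(3e-3, 10e3, 200e3)
print(A, A >= 6.0)   # above the reported threshold of 6.0
```

For these assumed numbers A is about 8, i.e. safely in the adiabatic regime; halving H1 would drop A by a factor of four and violate the condition.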

    A Dual Read-Out Assay to Evaluate the Potency of Compounds Active against Mycobacterium tuberculosis

    PMCID: PMC3617142. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

    Climbing the cosmic ladder with stellar twins

    Distances to stars are key to revealing a three-dimensional view of the Milky Way, yet their determination is a major challenge in astronomy. Whilst the brightest nearby stars benefit from direct parallax measurements, fainter stars are subject to indirect determinations with uncertainties exceeding 30%. We present an alternative approach to measuring distances using spectroscopically identified twin stars. Given a star with a known parallax, the distance to its twin is assumed to be directly related to the difference in their apparent magnitudes. We found 175 twin pairs in the ESO public HARPS archives and report excellent agreement with Hipparcos parallaxes, to within 7.5%. Most importantly, the accuracy of our results does not degrade with increasing stellar distance. With the ongoing collection of high-resolution stellar spectra, our method is well suited to complement Gaia. Comment: published online in MNRAS
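The twin method's distance relation is simply the distance modulus applied to two stars with identical absolute magnitude: m2 - m1 = 5 log10(d2/d1), so d2 = d1 * 10^((m2 - m1)/5). A minimal sketch with made-up magnitudes:

```python
# Distance to a spectroscopic twin from the apparent-magnitude difference.
# Since twins share the same absolute magnitude M, the distance moduli give
#   m2 - m1 = 5 * log10(d2 / d1).
# The reference distance and magnitudes below are illustrative.

def twin_distance(d1_pc, m1, m2):
    """Distance (pc) to the twin of a reference star at known distance d1_pc."""
    return d1_pc * 10 ** ((m2 - m1) / 5)

# Reference star at 25 pc (e.g. from a 40 mas Hipparcos parallax);
# its twin appears 5 magnitudes fainter, hence 10 times more distant.
d2 = twin_distance(25.0, 4.0, 9.0)
print(d2)  # 250.0
```

Because the relation involves only a magnitude difference, its fractional error does not grow with distance, which matches the abstract's claim that accuracy does not degrade for more distant twins.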